Stochastic Submodular Maximization
Abstract
We study the stochastic submodular maximization problem subject to a cardinality constraint. Our model captures the effect of uncertainty in a range of problems, such as cascade effects in social networks, capital budgeting, and sensor placement. We study non-adaptive and adaptive policies and give optimal constant-factor approximation algorithms for both cases. We also bound the adaptivity gap of the problem between 1.21 and 1.59.
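A non-adaptive policy of the kind studied above can be sketched as a greedy selection under a cardinality constraint, with the expected value E[f(S)] estimated by Monte Carlo sampling. The stochastic-coverage instance, the 0.9 activation probability, and all function names below are illustrative assumptions, not the paper's construction or its optimal algorithm.

```python
import random

def greedy_stochastic_submodular(ground_set, sample_value, k, n_samples=200):
    """Non-adaptive greedy for stochastic submodular maximization
    under the cardinality constraint |S| <= k.

    sample_value(S) draws one random realization and returns f(S);
    E[f(S)] is estimated by averaging n_samples draws.
    """
    def est(S):
        return sum(sample_value(S) for _ in range(n_samples)) / n_samples

    S = set()
    for _ in range(k):
        base = est(S)
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = est(S | {e}) - base  # estimated marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element with positive estimated gain
            break
        S.add(best)
    return S

# Toy stochastic coverage instance: each chosen item covers its
# targets only if it turns out to be "active" (probability 0.9).
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}

def sample_value(S):
    covered = set()
    for e in S:
        if random.random() < 0.9:
            covered |= coverage[e]
    return len(covered)

random.seed(0)
S = greedy_stochastic_submodular({1, 2, 3}, sample_value, k=2)
```

For monotone submodular objectives, this kind of greedy rule is the standard baseline against which constant-factor guarantees like the ones above are stated.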
Similar works
Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap
In this paper, we study the problem of constrained stochastic continuous submodular maximization. Even though the objective function is neither concave nor convex and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous D...
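The conditional-gradient idea can be sketched as a plain Frank-Wolfe-style ascent on the multilinear extension over a cardinality polytope. This is only an illustration of the method family: it omits the momentum/averaging that gives Stochastic Continuous Greedy its tight guarantee, and the coverage instance and sample sizes are assumptions for the sketch.

```python
import random

# Toy coverage instance (illustrative, not from the paper).
coverage = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}

def f(S):
    covered = set()
    for i in S:
        covered |= coverage[i]
    return len(covered)

def sample_grad(x, n_samples=20):
    """Unbiased estimate of the gradient of the multilinear extension
    F(x) = E_{S~x}[f(S)], using dF/dx_i = E[f(S + i) - f(S - i)]."""
    n = len(x)
    g = [0.0] * n
    for _ in range(n_samples):
        S = {i for i in range(n) if random.random() < x[i]}
        for i in range(n):
            g[i] += f(S | {i}) - f(S - {i})
    return [gi / n_samples for gi in g]

def stochastic_continuous_greedy(n, k, sample_grad, T=20):
    """Conditional-gradient ascent over {x in [0,1]^n : sum(x) <= k}."""
    x = [0.0] * n
    for _ in range(T):
        g = sample_grad(x)
        # Linear maximization over the polytope: indicator of the
        # top-k gradient coordinates.
        top = sorted(range(n), key=lambda i: g[i], reverse=True)[:k]
        x = [xi + (1.0 / T if i in top else 0.0) for i, xi in enumerate(x)]
    return x

random.seed(1)
x = stochastic_continuous_greedy(3, 2, sample_grad)
```

The returned fractional point would then be rounded (e.g. by taking the k largest coordinates) to obtain a discrete set.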
Stochastic Submodular Maximization: The Case of Coverage Functions
Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are discrete in nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, on such discrete problems. We first introduce the problem of stochastic submodular opt...
Discrete Stochastic Submodular Maximization: Adaptive vs. Non-adaptive vs. Offline
We consider the problem of stochastic monotone submodular function maximization, subject to constraints. We give results on adaptivity gaps, and on the gap between the optimal offline and online solutions. We present a procedure that transforms a decision tree (adaptive algorithm) into a non-adaptive chain. We prove that this chain achieves at least τ times the utility of the decision tree, ove...
Deterministic & Adaptive Non-Submodular Maximization via the Primal Curvature
While greedy algorithms have long been observed to perform well on a wide variety of problems, up to now approximation ratios have only been known for their application to problems having submodular objective functions f. Since many practical problems have non-submodular f, there is a critical need to devise new techniques to bound the performance of greedy algorithms in the case of non-submo...
Submodular Mini-Batch Training in Generative Moment Matching Networks
The generative moment matching network (GMMN), which is based on the maximum mean discrepancy (MMD) measure, is a generative model for unsupervised learning in which mini-batch stochastic gradient descent is used to update the parameters. In this work, instead of drawing each mini-batch at random, the mini-batch at each iteration is selected in a submodular way so that the most informativ...
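One common way to make mini-batch selection submodular is to greedily maximize a facility-location objective, which rewards batches whose elements are close to every training point. This sketch assumes that objective and a toy similarity kernel; it is an illustration of the idea, not necessarily the criterion used in the GMMN work.

```python
import math

def facility_location_minibatch(X, batch_size, sim):
    """Greedily pick a mini-batch B maximizing the monotone submodular
    facility-location objective sum_j max_{i in B} sim(X[j], X[i])."""
    n = len(X)
    best_sim = [0.0] * n  # best similarity of each point to the batch so far
    batch = []
    for _ in range(batch_size):
        best_i, best_gain = None, -1.0
        for i in range(n):
            if i in batch:
                continue
            # Marginal gain: how much each point's best similarity improves.
            gain = sum(max(sim(X[j], X[i]) - best_sim[j], 0.0)
                       for j in range(n))
            if gain > best_gain:
                best_i, best_gain = i, gain
        batch.append(best_i)
        best_sim = [max(best_sim[j], sim(X[j], X[best_i])) for j in range(n)]
    return batch

# Toy 1-D data with two clusters and a Gaussian similarity kernel.
X = [0.0, 0.1, 5.0, 5.1]
sim = lambda a, b: math.exp(-(a - b) ** 2)
batch = facility_location_minibatch(X, 2, sim)
```

On this toy data the greedy rule picks one representative from each cluster, which is the diversity effect the submodular selection is after.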